
    Hemispheric Asymmetries in Speech Perception: Sense, Nonsense and Modulations

    Background: The well-established left-hemisphere specialisation for language processing has long been claimed to rest on a low-level auditory specialisation for specific acoustic features in speech, particularly 'rapid temporal processing'.

    Methodology: A novel analysis/synthesis technique was used to construct a variety of sounds, based on simple sentences, that could be manipulated in spectro-temporal complexity and in whether or not they were intelligible. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) whose frequency and/or amplitude could be static or could vary independently. Dynamically varying both acoustic features according to the same sentence produced intelligible speech, but when either or both features were static the stimuli were not intelligible. Combining the frequency dynamics of one sentence with the amplitude dynamics of another produced unintelligible sounds of spectro-temporal complexity comparable to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active while participants listened to the different sounds.

    Conclusions: Neural activity evoked by spectral and amplitude modulations sufficient to support speech intelligibility (without the sounds actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only for intelligible sounds. It thus appears that the left-hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.
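    The stimulus construction described above (two noise-excited spectral prominences whose centre frequencies and amplitudes can each be held static or varied over time) can be illustrated with a short synthesis sketch. The code below is not the authors' analysis/synthesis technique; it is a minimal frame-by-frame approximation in Python, and the trajectories, bandwidths and sample rate are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000   # sample rate in Hz (assumed)
FRAME = 160  # 10 ms synthesis frames

def noise_excited_formants(f1_track, f2_track, a1_track, a2_track, fs=FS, frame=FRAME):
    """Synthesise a two-prominence noise-excited signal, frame by frame.

    f1_track/f2_track: centre-frequency trajectories (Hz), one value per frame.
    a1_track/a2_track: linear amplitude trajectories, one value per frame.
    Passing constant trajectories yields the 'static' stimulus conditions.
    """
    rng = np.random.default_rng(0)
    out = []
    for f1, f2, a1, a2 in zip(f1_track, f2_track, a1_track, a2_track):
        noise = rng.standard_normal(frame)
        seg = np.zeros(frame)
        for fc, amp in ((f1, a1), (f2, a2)):
            # Crude band-pass around each spectral prominence (±100 Hz, assumed width)
            lo = max(fc - 100, 50) / (fs / 2)
            hi = min(fc + 100, fs / 2 - 1) / (fs / 2)
            b, a = butter(2, [lo, hi], btype="band")
            seg += amp * lfilter(b, a, noise)
        out.append(seg)
    return np.concatenate(out)

# Example: dynamic frequency and dynamic amplitude trajectories (~1 s of signal)
n = 100
f1 = np.linspace(300, 700, n)                              # rising lower prominence
f2 = np.linspace(2200, 1200, n)                            # falling upper prominence
amp = 0.5 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, n))     # slow amplitude modulation
signal = noise_excited_formants(f1, f2, amp, amp)
```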

    Tinnitus- and Task-Related Differences in Resting-State Networks

    We investigated tinnitus-related differences in functional networks in adults with tinnitus by means of a functional connectivity study. Previous work found that various networks show differences in connectivity in patients with tinnitus compared to controls; how this relates to patients' ongoing tinnitus, and whether the ecological sensory environment modulates connectivity, remained unknown. Twenty healthy controls and twenty patients suffering from chronic tinnitus were enrolled in this study. Except for the presence of tinnitus in the patient group, all subjects were selected to have normal or near-normal hearing. fMRI data were obtained in two different functional states: in one set of runs, subjects freely viewed emotionally salient movie fragments ("fixed-state"), while in the other they performed no task ("resting-state"). After data pre-processing, Principal Component Analysis was performed to obtain 25 components for all datasets. These were fed into an Independent Component Analysis (ICA), concatenating the data across both groups and both datasets, to obtain group-level networks of neural origin, each consisting of a spatial map with its respective time-course. Subject-specific maps and their time-courses were obtained by back-projection (Dual Regression). For each component, a mixed-effects linear model was composed with factors group (tinnitus vs. controls), task (fixed-state vs. resting-state) and their interaction. The neural components comprised the visual, sensorimotor, auditory, and limbic systems, the default mode, dorsal attention, executive-control, and frontoparietal networks, and the cerebellum. Most notably, the default mode network (DMN) was less extensive and showed significantly less connectivity in tinnitus patients than in controls. This group difference existed in both paradigms. At the same time, the DMN was stronger during resting-state than during fixed-state in the controls but not in the patients. We attribute this pattern to the unremitting engaging effect of the tinnitus percept.
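    The analysis chain described above (dimensionality reduction, group-level ICA on temporally concatenated data, back-projection by dual regression, and a per-component mixed-effects model) can be sketched as follows. This is an assumed, simplified reconstruction in Python rather than the study's actual pipeline; the function names, the connectivity summary measure ("strength") and the data layout are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
import statsmodels.formula.api as smf

N_COMP = 25  # number of components, as in the study

def group_ica(datasets, n_components=N_COMP):
    """Temporally concatenate (time x voxel) subject runs and extract group spatial maps.

    FastICA performs the PCA-based whitening to n_components internally.
    """
    concat = np.vstack(datasets)                       # (total_time, voxels)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    ica.fit(concat)
    return ica.components_                             # (components, voxels)

def dual_regression(data, group_maps):
    """Back-project group maps onto a single subject's run.

    Stage 1: regress spatial maps against each time point -> subject time-courses.
    Stage 2: regress time-courses against each voxel -> subject-specific maps.
    """
    tc, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)   # (components, time)
    maps, *_ = np.linalg.lstsq(tc.T, data, rcond=None)           # (components, voxels)
    return tc.T, maps

def fit_component_model(df):
    """Mixed-effects model for one component.

    df is a pandas DataFrame with columns: subject, group, task, strength,
    where 'strength' is some per-subject connectivity summary derived from the
    dual-regression output (placeholder measure).
    """
    return smf.mixedlm("strength ~ group * task", df, groups=df["subject"]).fit()
```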

    Audiotactile interactions in temporal perception


    Lateralization, connectivity and plasticity in the human central auditory system

    Although it is known that responses in the auditory cortex are evoked predominantly contralateral to the side of stimulation, the lateralization of responses at lower levels of the human central auditory system has hardly been studied. Furthermore, little is known about the functional interactions between the processing centers involved. In this study, functional MRI was performed using sound stimuli of varying left and right intensities. In normal-hearing subjects, contralateral activation was consistently detected in the temporal lobe, thalamus and midbrain. Connectivity analyses showed that auditory information crosses to the contralateral side in the lower brainstem, followed by ipsilateral signal conduction towards the auditory cortex, similar to the flow of auditory signals in other mammals. In unilaterally deaf subjects, activation was more symmetrical in the cortices but remained contralateral in the midbrain and thalamus. Input connection strengths differed only at cortical levels, and there was no evidence for plastic reorganization at subcortical levels.
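    The contralateral dominance reported here is commonly summarised with a simple lateralization index comparing responses in the hemisphere contralateral versus ipsilateral to the stimulated ear. The snippet below only illustrates that index with made-up numbers; it is not derived from the study's data.

```python
def lateralization_index(contra, ipsi):
    """Standard lateralization index: +1 = fully contralateral, -1 = fully ipsilateral.

    contra/ipsi: mean response amplitudes (arbitrary units) in the hemisphere
    contralateral vs. ipsilateral to the stimulated ear.
    """
    return (contra - ipsi) / (contra + ipsi)

# Illustrative (made-up) values: strong contralateral dominance in one structure,
# a more symmetric response pattern in another.
print(lateralization_index(1.8, 0.6))   # 0.5
print(lateralization_index(1.1, 0.9))   # 0.1
```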